Whenever you make an estimation or measurement, your estimated or measured value can

differ from the truth by being inaccurate, imprecise, or both.

Accuracy refers to how close your measurement tends to come to the true value, without being

systematically biased in one direction or another. Such a bias is called a systematic error.

Precision refers to how close several replicate measurements come to each other — that is, how

reproducible they are.

In estimation, random errors reduce precision, while systematic errors reduce accuracy. You cannot control

random error, but you can control systematic error by improving your measurement methods. Consider

the four different situations that can arise if you take multiple measurements from the same population:

High precision and high accuracy is an ideal result. It means that each measurement you take is

close to the others, and all of these are close to the true population value.

High precision and low accuracy is not as ideal. This is where repeat measurements tend to be

close to one another, but are not that close to the true value. This situation can arise when you ask

survey respondents to self-report their weight. The average of the answers may be similar survey after

survey, but the answers may be systematically lower than the true values. Although it is easy to predict what

the next measurement will be, the measurement is less useful if it does not help you know the true

value. This indicates you may want to improve your measurement methods.

Low precision and high accuracy is also not as ideal. This is where the measurements are not that

close to one another, but their average is close to the true population value. In this case, you may trust

your measurements, but find that it is hard to predict what the next one will be due to random error.

Low precision and low accuracy is the least ideal result: the measurements are neither close to

one another nor close to the true value. This can only be improved by improving your measurement methods.
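The four situations above can be illustrated with a short simulation. In the hypothetical sketch below, a systematic bias controls accuracy and the spread of random noise controls precision; the true value of 70 and the bias and noise settings are invented for illustration.

```python
import random

random.seed(0)
TRUE_VALUE = 70.0  # hypothetical true population value (for example, mean weight in kg)

def simulate(bias, noise_sd, n=1000):
    """Simulate n measurements with a systematic bias and random noise,
    returning the mean (reflects accuracy) and SD (reflects precision)."""
    vals = [random.gauss(TRUE_VALUE + bias, noise_sd) for _ in range(n)]
    mean = sum(vals) / n
    sd = (sum((v - mean) ** 2 for v in vals) / (n - 1)) ** 0.5
    return mean, sd

# bias shifts the measurements away from the truth; noise_sd spreads them out
scenarios = [("high precision, high accuracy", 0.0, 0.5),
             ("high precision, low accuracy", -5.0, 0.5),
             ("low precision, high accuracy", 0.0, 5.0),
             ("low precision, low accuracy", -5.0, 5.0)]
for label, bias, noise_sd in scenarios:
    mean, sd = simulate(bias, noise_sd)
    print(f"{label}: mean = {mean:.1f}, sd = {sd:.1f}")
```

Running this shows that bias pulls the mean away from the true value regardless of precision, while larger random noise widens the spread regardless of accuracy.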

Sampling distributions and standard errors

The standard error (abbreviated SE) is one way to indicate the level of precision about an

estimate or measurement from a sample. The SE tells you how much the estimate or measured

value may vary if you were to repeat the experiment or the measurement many times using a

different random sample from the same population each time, and recording the value you

obtained each time. This collection of numbers would have a spread of values, forming what is

called the sampling distribution for that variable. The SE is a measure of the width of the

sampling distribution, as described in Chapter 9.
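The repeated-sampling idea can be sketched directly. In this hypothetical Python example (the population mean of 120, SD of 15, and sample size of 25 are invented for illustration), many random samples are drawn, each sample mean is recorded, and the spread of those means forms the sampling distribution whose SD is the SE:

```python
import random
import statistics

random.seed(1)
POP_MEAN, POP_SD, N = 120.0, 15.0, 25  # hypothetical population and sample size

# Repeat the "experiment" many times, recording the sample mean each time
sample_means = []
for _ in range(5000):
    sample = [random.gauss(POP_MEAN, POP_SD) for _ in range(N)]
    sample_means.append(statistics.mean(sample))

# The SD of the sampling distribution is the SE of the mean;
# for a mean, theory says it should be close to POP_SD / sqrt(N)
empirical_se = statistics.stdev(sample_means)
theoretical_se = POP_SD / N ** 0.5
print(f"empirical SE = {empirical_se:.2f}, theoretical SE = {theoretical_se:.2f}")
```

The empirical SD of the 5,000 sample means comes out close to the theoretical value of 15 / √25 = 3.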

Fortunately, you don’t have to repeat the entire experiment a large number of times to calculate the SE.

You can usually estimate the SE using data from a single sample; for a mean, it is the sample standard deviation divided by the square root of the sample size.
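As a minimal sketch of the single-sample calculation (using the same invented population values as before), the SE of the mean is estimated as SD / √n:

```python
import random
import statistics

random.seed(2)
# One sample of n = 25 drawn from a hypothetical population (mean 120, SD 15)
sample = [random.gauss(120.0, 15.0) for _ in range(25)]

# Estimate the SE of the mean from this single sample: SD / sqrt(n)
se = statistics.stdev(sample) / len(sample) ** 0.5
print(f"estimated SE of the mean = {se:.2f}")
```

No repetition of the experiment is needed: one sample's SD and size are enough to approximate the width of the sampling distribution.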

Confidence intervals

An important application of statistical estimation theory in biostatistics is calculating confidence